Exploration server: Santé et pratique musicale (Health and Musical Practice)

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Auditory rhyme processing in expert freestyle rap lyricists and novices: An ERP study.

Internal identifier: 000620 (Main/Exploration); previous: 000619; next: 000621


Authors: Keith Cross [United States]; Takako Fujioka [United States]

Source:

RBID: pubmed:30951740

French descriptors

English descriptors

Abstract

Music and language processing share and sometimes compete for brain resources. An extreme case of such shared processing occurs in improvised rap music, in which performers, or 'lyricists', combine rhyming, rhythmic, and semantic structures of language with musical rhythm, harmony, and phrasing to create integrally meaningful musical expressions. We used event-related potentials (ERPs) to investigate how auditory rhyme sequence processing differed between expert lyricists and non-lyricists. Participants listened to rhythmically presented pseudo-word triplets each of which terminated in a full-rhyme (e.g., STEEK, PREEK; FLEEK), half-rhyme (e.g., STEEK, PREEK; FREET), or non-rhyme (e.g., STEEK, PREEK; YAME), then judged each sequence in its aesthetic (Do you 'like' the rhyme?) or technical (Is the rhyme 'perfect'?) aspect. Phonological N450 showed rhyming effects between conditions (i.e., non vs. full; half vs. full; non vs. half) similarly across groups in parietal electrodes. However, concurrent activity in frontocentral electrodes showed left-laterality in non-lyricists, but not lyricists. Furthermore, non-lyricists' responses to the three conditions were distinct in morphology and amplitude at left-hemisphere electrodes with no condition difference at right-hemisphere electrodes, while lyricists' responses to half-rhymes they deemed unsatisfactory were similar to full-rhyme at left-hemisphere electrodes, and similar to non-rhyme at right-hemisphere electrodes. The CNV response observed while waiting for the second and third pseudo-word in the sequence was more enhanced to aesthetic rhyme judgment tasks than to technical rhyme judgment tasks in non-lyricists, suggesting their investment of greater effort for aesthetic rhyme judgments. No task effects were observed in lyricists, suggesting that aesthetic and technical rhyme judgments may engage the same processes for experts. Overall, our findings suggest that extensive practice of improvised lyricism may uniquely encourage the neuroplasticity of integrated linguistic and musical feature processing in the brain.

DOI: 10.1016/j.neuropsychologia.2019.03.022
PubMed: 30951740


Affiliations:


Links to previous steps (curation, corpus, ...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Auditory rhyme processing in expert freestyle rap lyricists and novices: An ERP study.</title>
<author>
<name sortKey="Cross, Keith" sort="Cross, Keith" uniqKey="Cross K" first="Keith" last="Cross">Keith Cross</name>
<affiliation wicri:level="2">
<nlm:affiliation>Curriculum Studies Department, College of Education, University of Hawai`i at Mānoa, Honolulu, HI, USA. Electronic address: kcross2@hawaii.edu.</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Curriculum Studies Department, College of Education, University of Hawai`i at Mānoa, Honolulu, HI</wicri:regionArea>
<placeName>
<region type="state">Hawaï</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Fujioka, Takako" sort="Fujioka, Takako" uniqKey="Fujioka T" first="Takako" last="Fujioka">Takako Fujioka</name>
<affiliation wicri:level="4">
<nlm:affiliation>Centre for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Centre for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA</wicri:regionArea>
<placeName>
<region type="state">Californie</region>
<settlement type="city">Stanford (Californie)</settlement>
</placeName>
<orgName type="university">Université Stanford</orgName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2019">2019</date>
<idno type="RBID">pubmed:30951740</idno>
<idno type="pmid">30951740</idno>
<idno type="doi">10.1016/j.neuropsychologia.2019.03.022</idno>
<idno type="wicri:Area/Main/Corpus">000549</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">000549</idno>
<idno type="wicri:Area/Main/Curation">000549</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">000549</idno>
<idno type="wicri:Area/Main/Exploration">000549</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Auditory rhyme processing in expert freestyle rap lyricists and novices: An ERP study.</title>
<author>
<name sortKey="Cross, Keith" sort="Cross, Keith" uniqKey="Cross K" first="Keith" last="Cross">Keith Cross</name>
<affiliation wicri:level="2">
<nlm:affiliation>Curriculum Studies Department, College of Education, University of Hawai`i at Mānoa, Honolulu, HI, USA. Electronic address: kcross2@hawaii.edu.</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Curriculum Studies Department, College of Education, University of Hawai`i at Mānoa, Honolulu, HI</wicri:regionArea>
<placeName>
<region type="state">Hawaï</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Fujioka, Takako" sort="Fujioka, Takako" uniqKey="Fujioka T" first="Takako" last="Fujioka">Takako Fujioka</name>
<affiliation wicri:level="4">
<nlm:affiliation>Centre for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Centre for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA</wicri:regionArea>
<placeName>
<region type="state">Californie</region>
<settlement type="city">Stanford (Californie)</settlement>
</placeName>
<orgName type="university">Université Stanford</orgName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Neuropsychologia</title>
<idno type="eISSN">1873-3514</idno>
<imprint>
<date when="2019" type="published">2019</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adult (MeSH)</term>
<term>Auditory Perception (MeSH)</term>
<term>Contingent Negative Variation (MeSH)</term>
<term>Electroencephalography (MeSH)</term>
<term>Esthetics (MeSH)</term>
<term>Evoked Potentials (MeSH)</term>
<term>Humans (MeSH)</term>
<term>Judgment (MeSH)</term>
<term>Language (MeSH)</term>
<term>Music (MeSH)</term>
<term>Neuronal Plasticity (MeSH)</term>
</keywords>
<keywords scheme="KwdFr" xml:lang="fr">
<term>Adulte (MeSH)</term>
<term>Esthétique (MeSH)</term>
<term>Humains (MeSH)</term>
<term>Jugement (MeSH)</term>
<term>Langage (MeSH)</term>
<term>Musique (MeSH)</term>
<term>Perception auditive (MeSH)</term>
<term>Plasticité neuronale (MeSH)</term>
<term>Potentiels évoqués (MeSH)</term>
<term>Variation contingente négative (MeSH)</term>
<term>Électroencéphalographie (MeSH)</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adult</term>
<term>Auditory Perception</term>
<term>Contingent Negative Variation</term>
<term>Electroencephalography</term>
<term>Esthetics</term>
<term>Evoked Potentials</term>
<term>Humans</term>
<term>Judgment</term>
<term>Language</term>
<term>Music</term>
<term>Neuronal Plasticity</term>
</keywords>
<keywords scheme="MESH" xml:lang="fr">
<term>Adulte</term>
<term>Esthétique</term>
<term>Humains</term>
<term>Jugement</term>
<term>Langage</term>
<term>Musique</term>
<term>Perception auditive</term>
<term>Plasticité neuronale</term>
<term>Potentiels évoqués</term>
<term>Variation contingente négative</term>
<term>Électroencéphalographie</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Music and language processing share and sometimes compete for brain resources. An extreme case of such shared processing occurs in improvised rap music, in which performers, or 'lyricists', combine rhyming, rhythmic, and semantic structures of language with musical rhythm, harmony, and phrasing to create integrally meaningful musical expressions. We used event-related potentials (ERPs) to investigate how auditory rhyme sequence processing differed between expert lyricists and non-lyricists. Participants listened to rhythmically presented pseudo-word triplets each of which terminated in a full-rhyme (e.g., STEEK, PREEK; FLEEK), half-rhyme (e.g., STEEK, PREEK; FREET), or non-rhyme (e.g., STEEK, PREEK; YAME), then judged each sequence in its aesthetic (Do you 'like' the rhyme?) or technical (Is the rhyme 'perfect'?) aspect. Phonological N450 showed rhyming effects between conditions (i.e., non vs. full; half vs. full; non vs. half) similarly across groups in parietal electrodes. However, concurrent activity in frontocentral electrodes showed left-laterality in non-lyricists, but not lyricists. Furthermore, non-lyricists' responses to the three conditions were distinct in morphology and amplitude at left-hemisphere electrodes with no condition difference at right-hemisphere electrodes, while lyricists' responses to half-rhymes they deemed unsatisfactory were similar to full-rhyme at left-hemisphere electrodes, and similar to non-rhyme at right-hemisphere electrodes. The CNV response observed while waiting for the second and third pseudo-word in the sequence was more enhanced to aesthetic rhyme judgments tasks than to technical rhyme judgment tasks in non-lyricists, suggesting their investment of greater effort for aesthetic rhyme judgments. No task effects were observed in lyricists, suggesting that aesthetic and technical rhyme judgments may engage the same processes for experts. 
Overall, our findings suggest that extensive practice of improvised lyricism may uniquely encourage the neuroplasticity of integrated linguistic and musical feature processing in the brain.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">30951740</PMID>
<DateCompleted>
<Year>2020</Year>
<Month>07</Month>
<Day>13</Day>
</DateCompleted>
<DateRevised>
<Year>2020</Year>
<Month>07</Month>
<Day>13</Day>
</DateRevised>
<Article PubModel="Print-Electronic">
<Journal>
<ISSN IssnType="Electronic">1873-3514</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>129</Volume>
<PubDate>
<Year>2019</Year>
<Month>06</Month>
</PubDate>
</JournalIssue>
<Title>Neuropsychologia</Title>
<ISOAbbreviation>Neuropsychologia</ISOAbbreviation>
</Journal>
<ArticleTitle>Auditory rhyme processing in expert freestyle rap lyricists and novices: An ERP study.</ArticleTitle>
<Pagination>
<MedlinePgn>223-235</MedlinePgn>
</Pagination>
<ELocationID EIdType="pii" ValidYN="Y">S0028-3932(19)30077-6</ELocationID>
<ELocationID EIdType="doi" ValidYN="Y">10.1016/j.neuropsychologia.2019.03.022</ELocationID>
<Abstract>
<AbstractText>Music and language processing share and sometimes compete for brain resources. An extreme case of such shared processing occurs in improvised rap music, in which performers, or 'lyricists', combine rhyming, rhythmic, and semantic structures of language with musical rhythm, harmony, and phrasing to create integrally meaningful musical expressions. We used event-related potentials (ERPs) to investigate how auditory rhyme sequence processing differed between expert lyricists and non-lyricists. Participants listened to rhythmically presented pseudo-word triplets each of which terminated in a full-rhyme (e.g., STEEK, PREEK; FLEEK), half-rhyme (e.g., STEEK, PREEK; FREET), or non-rhyme (e.g., STEEK, PREEK; YAME), then judged each sequence in its aesthetic (Do you 'like' the rhyme?) or technical (Is the rhyme 'perfect'?) aspect. Phonological N450 showed rhyming effects between conditions (i.e., non vs. full; half vs. full; non vs. half) similarly across groups in parietal electrodes. However, concurrent activity in frontocentral electrodes showed left-laterality in non-lyricists, but not lyricists. Furthermore, non-lyricists' responses to the three conditions were distinct in morphology and amplitude at left-hemisphere electrodes with no condition difference at right-hemisphere electrodes, while lyricists' responses to half-rhymes they deemed unsatisfactory were similar to full-rhyme at left-hemisphere electrodes, and similar to non-rhyme at right-hemisphere electrodes. The CNV response observed while waiting for the second and third pseudo-word in the sequence was more enhanced to aesthetic rhyme judgments tasks than to technical rhyme judgment tasks in non-lyricists, suggesting their investment of greater effort for aesthetic rhyme judgments. No task effects were observed in lyricists, suggesting that aesthetic and technical rhyme judgments may engage the same processes for experts. 
Overall, our findings suggest that extensive practice of improvised lyricism may uniquely encourage the neuroplasticity of integrated linguistic and musical feature processing in the brain.</AbstractText>
<CopyrightInformation>Published by Elsevier Ltd.</CopyrightInformation>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Cross</LastName>
<ForeName>Keith</ForeName>
<Initials>K</Initials>
<AffiliationInfo>
<Affiliation>Curriculum Studies Department, College of Education, University of Hawai`i at Mānoa, Honolulu, HI, USA. Electronic address: kcross2@hawaii.edu.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Fujioka</LastName>
<ForeName>Takako</ForeName>
<Initials>T</Initials>
<AffiliationInfo>
<Affiliation>Centre for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2019</Year>
<Month>04</Month>
<Day>03</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>England</Country>
<MedlineTA>Neuropsychologia</MedlineTA>
<NlmUniqueID>0020713</NlmUniqueID>
<ISSNLinking>0028-3932</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName UI="D000328" MajorTopicYN="N">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001307" MajorTopicYN="Y">Auditory Perception</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D003265" MajorTopicYN="N">Contingent Negative Variation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D004569" MajorTopicYN="N">Electroencephalography</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D004954" MajorTopicYN="N">Esthetics</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D005071" MajorTopicYN="Y">Evoked Potentials</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D007600" MajorTopicYN="Y">Judgment</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D007802" MajorTopicYN="Y">Language</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009146" MajorTopicYN="Y">Music</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009473" MajorTopicYN="N">Neuronal Plasticity</DescriptorName>
</MeshHeading>
</MeshHeadingList>
<KeywordList Owner="NOTNLM">
<Keyword MajorTopicYN="Y">Hip-Hop</Keyword>
<Keyword MajorTopicYN="Y">Improvisation</Keyword>
<Keyword MajorTopicYN="Y">Language</Keyword>
<Keyword MajorTopicYN="Y">Music</Keyword>
<Keyword MajorTopicYN="Y">Plasticity</Keyword>
<Keyword MajorTopicYN="Y">Syntax</Keyword>
</KeywordList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2018</Year>
<Month>06</Month>
<Day>09</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="revised">
<Year>2019</Year>
<Month>02</Month>
<Day>02</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2019</Year>
<Month>03</Month>
<Day>28</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2019</Year>
<Month>4</Month>
<Day>6</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2020</Year>
<Month>7</Month>
<Day>14</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2019</Year>
<Month>4</Month>
<Day>6</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">30951740</ArticleId>
<ArticleId IdType="pii">S0028-3932(19)30077-6</ArticleId>
<ArticleId IdType="doi">10.1016/j.neuropsychologia.2019.03.022</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Californie</li>
<li>Hawaï</li>
</region>
<settlement>
<li>Stanford (Californie)</li>
</settlement>
<orgName>
<li>Université Stanford</li>
</orgName>
</list>
<tree>
<country name="États-Unis">
<region name="Hawaï">
<name sortKey="Cross, Keith" sort="Cross, Keith" uniqKey="Cross K" first="Keith" last="Cross">Keith Cross</name>
</region>
<name sortKey="Fujioka, Takako" sort="Fujioka, Takako" uniqKey="Fujioka T" first="Takako" last="Fujioka">Takako Fujioka</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000620 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000620 | SxmlIndent | more
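The `HfdSelect … | SxmlIndent` pipeline above prints the indented XML record to standard output; individual fields can then be pulled out with ordinary POSIX tools. A minimal sketch, using a hypothetical inline sample in place of the Dilib pipeline (which requires a Dilib installation):

```shell
# Stand-in for the output of `HfdSelect ... | SxmlIndent`; this is a
# minimal, hypothetical fragment of the indented record, not real output.
sample='<record>
  <idno type="doi">10.1016/j.neuropsychologia.2019.03.022</idno>
  <idno type="pmid">30951740</idno>
</record>'

# Extract one field from the indented XML with standard POSIX sed:
# capture whatever sits between <idno type="doi"> and </idno>.
doi=$(printf '%s\n' "$sample" | sed -n 's/.*<idno type="doi">\(.*\)<\/idno>.*/\1/p')
echo "$doi"
```

The same `sed` pattern can be applied directly to the real pipeline's output, since `SxmlIndent` emits one element per line.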

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:30951740
   |texte=   Auditory rhyme processing in expert freestyle rap lyricists and novices: An ERP study.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:30951740" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021